AIbase

# Multi-task Pre-finetuning

Muppet Roberta Base
MIT
A large-scale multi-task representation model obtained by pre-finetuning the RoBERTa-base architecture; it outperforms the original roberta-base on GLUE and question-answering tasks.
Tags: Large Language Model · Transformers · English
Publisher: facebook
Muppet Roberta Large
MIT
A large-scale multi-task pre-finetuned version of RoBERTa-large; it excels on GLUE and question-answering tasks, with especially large gains on small datasets.
Tags: Large Language Model · Transformers · English
Publisher: facebook
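Both checkpoints are distributed by facebook through the Hugging Face Hub. A minimal loading sketch, assuming the Hub IDs `facebook/muppet-roberta-base` and `facebook/muppet-roberta-large` and an environment with `transformers` and `torch` installed:

```python
# Hugging Face Hub IDs for the two Muppet checkpoints (assumed from this listing).
MODEL_IDS = {
    "base": "facebook/muppet-roberta-base",
    "large": "facebook/muppet-roberta-large",
}

def load_encoder(size: str = "base"):
    """Load a Muppet RoBERTa tokenizer and encoder for feature extraction.

    Requires `pip install transformers torch`; weights are downloaded on
    first use.
    """
    # Imported lazily so this module can be inspected without transformers.
    from transformers import AutoModel, AutoTokenizer

    model_id = MODEL_IDS[size]
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)
    return tokenizer, model
```

As the descriptions above suggest, these checkpoints are meant as stronger drop-in replacements for `roberta-base`/`roberta-large` before task-specific fine-tuning, rather than as standalone generators.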
© 2025 AIbase